This report describes the results of the first priming mindfulness study, as well as of the two online pilot studies from the Varela project.
library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(patchwork)
library(ggplot2)
library(rstatix)
library(forecast)
library(DescTools)
library(report)
source("D:/UQAM/Groupe Étudiant @Projet Remi - Documents/Etude 1 -- priming-agression/Analyses - Priming 1/Scripts R/mad functions.R")
summary(report::report(sessionInfo()))
The analysis was done using the R Statistical language (v4.1.2; R Core Team, 2021) on Windows 10 x64, using the packages rstatix (v0.7.0), DescTools (v0.99.44), effectsize (v0.6.0.7), ggplot2 (v3.3.5), stringr (v1.4.0), forcats (v0.5.1), tidyr (v1.2.0), readr (v2.1.2), dplyr (v1.0.8), forecast (v8.16), rmarkdown (v2.13), tibble (v3.1.6), purrr (v0.3.4), interactions (v1.1.5), parameters (v0.17.0.5), insight (v0.17.0), performance (v0.9.0), see (v0.7.0), easystats (v0.4.3), correlation (v0.8.0), modelbased (v0.8.0), bayestestR (v0.11.5.2), report (v0.5.1), datawizard (v0.4.0), rempsyc (v0.0.2.9), patchwork (v1.1.1) and tidyverse (v1.3.1).
df <- read.csv("data/data.csv")
data <- read.csv("data/fulldataset.csv")
cat(paste("The data consists of",
report::report_participants(data),
". There are no demographics available."))The data consists of 245 participants () . There are no demographics available.
# Dummy-code group variable
data <- data %>%
mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
condition = as.factor(condition))
# Allocation ratio
cat(paste("The allocation ratio is: ",
report::report(data$condition)))
The allocation ratio is: x: 2 levels, namely Control (n = 128, 52.24%) and Mindfulness (n = 117, 47.76%)
At this stage, we define a list of our relevant variables and standardize them using the median absolute deviation (MAD), which is more robust to extreme observations than standardization around the mean.
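The core of such MAD-based scaling can be sketched in a few lines. The actual `scale_mad` used below comes from the sourced `mad functions.R` script, so this is only an assumed, illustrative stand-in:

```r
# Illustrative stand-in for scale_mad: center on the median and scale by
# the median absolute deviation (R's mad() applies the 1.4826 consistency
# constant by default, making it comparable to the SD under normality).
scale_mad_sketch <- function(x) {
  (x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE)
}

x <- c(1, 2, 3, 4, 100)  # one extreme observation
scale_mad_sketch(x)      # the outlier barely distorts the center or the scale
```

Unlike mean/SD standardization, a single extreme value here leaves the scaled scores of the other observations essentially untouched.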
# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
"blastintensity.first", "blastduration.first", "KIMS",
"BSCS", "BAQ", "SOPT", "IAT")
# Create new variable blastintensity.duration
data$blastintensity.duration <- (data$blastintensity * data$blastduration)
# Divide by 2? Do some other sort of transformation given I multiplied two scores?
# Should I multiply them after standardization or before?
# Standardize and center main continuous IV variable (based on MAD)
data <- data %>%
mutate(across(all_of(col.list),
~scale_mad(.x),
.names = "{col}.mad"))
# Rename col.list with the MAD extension
col.list <- paste0(col.list, ".mad")Why combine the intensity and duration scores? Should we? For a discussion, see:
Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: the unstandardized use of the competitive reaction time task in aggression research. Psychological assessment, 26(2), 419. https://doi.org/10.1037/a0035569
Why use the first sound blast only instead of the average of all trials? Should we?
According to some, the Taylor Aggression Paradigm is not a measure of aggression per se, but of reactive aggression, because participants react to the other “participant’s” aggression. They suggest that for a pure measure of aggression, it is recommended to use only the first sound blast delivered by the participant, before they receive one themselves. At this stage, we attempt the analyses with all these different measures of aggression for exploratory purposes. See the earlier reference to Elson et al. (2014).
In this section, we will: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, (d) identify and winsorize outliers, and (e) conduct the t-tests.
# Group normality
sapply(col.list, function(x)
nice_normality(data,
x,
"condition",
shapiro = TRUE,
title = x),
USE.NAMES = TRUE,
simplify = FALSE)
> $blastintensity.mad
>
> $blastduration.mad
>
> $blastintensity.duration.mad
>
> $blastintensity.first.mad
>
> $blastduration.first.mad
>
> $KIMS.mad
>
> $BSCS.mad
>
> $BAQ.mad
>
> $SOPT.mad
>
> $IAT.mad
Everything looks good except for the SOPT variable, which is clearly left skewed. We will have no choice but to apply a transformation.
Normally, the SOPT raw scores represent the number of errors, but I had initially multiplied them by -1 so that a smaller score would mean lower working memory capacity. Here, we add a constant of 7 to the MAD-standardized scores to ensure all values are positive, since zero or negative values interfere with the transformation. We will use the Box–Cox transformation to be able to specify an optimal transformation parameter.
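For reference, `forecast::BoxCox` implements the standard one-parameter Box–Cox family, which for strictly positive input (hence the constant added above) can be sketched as:

```r
# The one-parameter Box–Cox family (what forecast::BoxCox computes for
# positive x): (x^lambda - 1) / lambda, with log(x) as the lambda = 0 limit.
box_cox <- function(x, lambda) {
  if (lambda == 0) log(x) else (x^lambda - 1) / lambda
}

box_cox(c(1, 2, 4), lambda = 0)  # reduces to the log transform
box_cox(c(1, 2, 4), lambda = 1)  # a simple shift: x - 1
```

Note that `forecast::BoxCox.lambda()` can also estimate lambda automatically (e.g., via maximum likelihood), as an alternative to choosing it by eye.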
# Reverse
#data$SOPT.r <- data$SOPT*-1
# data$SOPT.r <- data$SOPT.mad*-1
# Add constant
# data$SOPT.r <- data$SOPT.r + 1
data$SOPT.r <- data$SOPT.mad + 7
# Box–Cox transformation
#data$SOPT.cox = BoxCox(data$SOPT.r, lambda = 0.25)
# data$SOPT.mad.cox = BoxCox(data$SOPT.r, lambda = 0.25)
# We use a lambda of 3.5, chosen by trial and error to make both groups look as normal as possible.
data$SOPT.mad.cox = BoxCox(data$SOPT.r, lambda = 3.5)
# Check normality again
nice_normality(data,
"SOPT.mad.cox",
"condition",
shapiro = TRUE)
# Update col.list
col.list[charmatch("SOPT", col.list)] <-
paste0(col.list[charmatch("SOPT", col.list)], ".cox")
The normality problem is now solved for our SOPT variable. We can now resume with the next step: checking variance.
# Plotting variance
plots(lapply(col.list, function(x) {
nice_varplot(data, x, group = "condition")
}),
n_columns = 3)
Variance looks good. No group has four times the variance of any other group. We can now resume with checking outliers.
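The four-times-the-variance rule of thumb can also be checked numerically. A self-contained sketch with simulated data (the real check would use `data`, the columns in `col.list`, and `condition`):

```r
set.seed(1)
toy <- data.frame(group = rep(c("Control", "Mindfulness"), each = 50),
                  y = rnorm(100))

v <- tapply(toy$y, toy$group, var)  # per-group variances
max(v) / min(v)                     # heuristic: flag if this ratio exceeds 4
</imports>
```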
# Using boxplots
plots(lapply(col.list, function(x) {
ggplot(data, aes(condition, !!sym(x))) +
geom_boxplot()
}),
n_columns = 3)
There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.
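The logic behind `find_mad` (from the sourced `mad functions.R` script) amounts to flagging observations more than `criteria` median absolute deviations from the median; an assumed, simplified stand-in:

```r
# Flag observations whose robust z-score (distance from the median in
# MAD units) exceeds the criterion.
is_mad_outlier <- function(x, criteria = 3) {
  abs(x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE) > criteria
}

which(is_mad_outlier(c(1, 2, 3, 4, 100)))  # flags only the extreme 5th value
```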
find_mad(data, col.list, "manualworkerId", criteria = 3)
> 11 outliers based on 3 median absolute deviations.
> $blastintensity.mad
> [1] Row manualworkerId blastintensity.mad
> <0 rows> (or 0-length row.names)
>
> $blastduration.mad
> [1] Row manualworkerId blastduration.mad
> <0 rows> (or 0-length row.names)
>
> $blastintensity.duration.mad
> Row manualworkerId blastintensity.duration.mad
> 1 45 A1PHDT66U6IK4Q 3.2
> 2 47 A1QRX4YCTC5ADW 3.7
> 3 73 A27ATGMMHBO50Z 3.3
> 4 144 A37Z5PZ4B7VO49 3.7
> 5 154 A3CLVX6GRHIY29 3.7
> 6 200 A5B73N9C2HVEG 3.7
> 7 223 AK4PVAUX4PWT1 3.5
> 8 239 AW5O1RK3W60FC 3.6
> 9 242 AXBBXKSC9VFSW 3.7
>
> $blastintensity.first.mad
> [1] Row manualworkerId blastintensity.first.mad
> <0 rows> (or 0-length row.names)
>
> $blastduration.first.mad
> [1] Row manualworkerId blastduration.first.mad
> <0 rows> (or 0-length row.names)
>
> $KIMS.mad
> [1] Row manualworkerId KIMS.mad
> <0 rows> (or 0-length row.names)
>
> $BSCS.mad
> [1] Row manualworkerId BSCS.mad
> <0 rows> (or 0-length row.names)
>
> $BAQ.mad
> Row manualworkerId BAQ.mad
> 1 20 A19WXHQSBB6P6O 3.1
> 2 242 AXBBXKSC9VFSW 3.4
>
> $SOPT.mad.cox
> [1] Row manualworkerId SOPT.mad.cox
> <0 rows> (or 0-length row.names)
>
> $IAT.mad
> Row manualworkerId IAT.mad
> 1 54 A1VBKV0KJ1FOUC 3
# 17 people using the MAD method
# 37 on MAD standardized data
Visual assessment and the MAD method confirm that we have some outlier values. We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them by bringing the values back to within 3 SD. Instead of using the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:
Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
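Conceptually, MAD-based winsorizing clips values lying beyond median ± 3 MAD back to that bound. The actual `winsorize_mad` comes from the sourced script; this is an assumed, simplified equivalent:

```r
# Clip both tails at median +/- criteria * MAD.
winsorize_mad_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  half_width <- criteria * mad(x, na.rm = TRUE)
  pmin(pmax(x, med - half_width), med + half_width)
}

winsorize_mad_sketch(c(1, 2, 3, 4, 100))  # 100 is pulled back to ~7.45
```

Unlike trimming, winsorizing keeps every observation in the sample; only the extreme values are moved to the boundary.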
# Winsorize variables of interest with MAD
data <- data %>%
mutate(across(all_of(col.list),
winsorize_mad,
.names = "{.col}.w"))
# Update col.list
col.list <- paste0(col.list, ".w")Outliers are still present but were brought back within reasonable limits, where applicable. We are now ready to compare the group condition (Control vs. Mindfulness Priming) across our different variables with the t-tests.
nice_t_test(data,
response = col.list,
group = "condition") %>%
nice_table(highlight = 0.10)
> Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
> For the Student t-test, use `var.equal = TRUE`.
>
>
Dependent Variable | t | df | p | d | 95% CI |
blastintensity.mad.w | -1.76 | 234.38 | .079 | -0.23 | [-0.48, 0.03] |
blastduration.mad.w | -0.73 | 237.03 | .469 | -0.09 | [-0.34, 0.16] |
blastintensity.duration.mad.w | -1.67 | 227.85 | .096 | -0.22 | [-0.47, 0.04] |
blastintensity.first.mad.w | -0.05 | 240.66 | .957 | -0.01 | [-0.26, 0.24] |
blastduration.first.mad.w | 0.25 | 241.71 | .800 | 0.03 | [-0.22, 0.28] |
KIMS.mad.w | -0.91 | 238.40 | .363 | -0.12 | [-0.37, 0.13] |
BSCS.mad.w | -0.56 | 241.74 | .578 | -0.07 | [-0.32, 0.18] |
BAQ.mad.w | 1.34 | 242.55 | .181 | 0.17 | [-0.08, 0.42] |
SOPT.mad.cox.w | -0.35 | 236.76 | .725 | -0.05 | [-0.3, 0.21] |
IAT.mad.w | 1.33 | 242.70 | .184 | 0.17 | [-0.08, 0.42] |
Interpretation: There is no clear group effect of our experimental condition on our different variables. However, there is a marginal effect of condition on blast intensity, with the mindfulness group showing slightly higher blast intensity than the control group. Let’s visualize this effect.
nice_violin(data,
group = "condition",
response = "blastintensity.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
nice_violin(data,
group = "condition",
response = "blastduration.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
nice_violin(data,
group = "condition",
response = "blastintensity.duration.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
Let’s extract the means and standard deviations for journal reporting.
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity),
SD = sd(blastintensity),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 4.38 | 2.32 | 128 |
Mindfulness | 4.93 | 2.57 | 117 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastduration),
SD = sd(blastduration),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 1,019.01 | 440.26 | 128 |
Mindfulness | 1,061.38 | 471.63 | 117 |
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.duration),
SD = sd(blastintensity.duration),
N = n()) %>%
nice_table(width = 0.40)
condition | M | SD | N |
Control | 5,368.70 | 4,245.46 | 128 |
Mindfulness | 6,356.67 | 5,041.47 | 117 |
Let’s see whether our variables interact with our experimental condition. But first, let’s test the model assumptions.
big.mod1 <- lm(blastintensity.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod1)
The model assumptions look really good actually, even with all these variables [less true now with the standardization and winsorization!]. Let’s look at the results.
big.mod2 <- lm(blastduration.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod2)
The model assumptions look really good actually, even with all these variables [less true now with the standardization and winsorization!]. Let’s look at the results.
big.mod3 <- lm(blastintensity.duration.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod3)
The model assumptions look really good actually, even with all these variables [less true now with the standardization and winsorization!]. Let’s look at the results.
big.mod1 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.mad.w | condition_dum | 233 | 0.50 | 1.67 | .097 | .01 |
blastintensity.mad.w | KIMS.mad.w | 233 | 0.15 | 1.31 | .193 | .01 |
blastintensity.mad.w | BSCS.mad.w | 233 | -0.16 | -1.38 | .168 | .01 |
blastintensity.mad.w | BAQ.mad.w | 233 | 0.14 | 1.27 | .204 | .01 |
blastintensity.mad.w | SOPT.mad.cox.w | 233 | -0.00 | -2.62 | .009 | .02 |
blastintensity.mad.w | IAT.mad.w | 233 | 0.17 | 1.96 | .051 | .01 |
blastintensity.mad.w | condition_dum:KIMS.mad.w | 233 | -0.23 | -1.41 | .160 | .01 |
blastintensity.mad.w | condition_dum:BSCS.mad.w | 233 | 0.58 | 3.26 | .001 | .04 |
blastintensity.mad.w | condition_dum:BAQ.mad.w | 233 | 0.04 | 0.24 | .809 | .00 |
blastintensity.mad.w | condition_dum:SOPT.mad.cox.w | 233 | -0.00 | -0.69 | .493 | .00 |
blastintensity.mad.w | condition_dum:IAT.mad.w | 233 | -0.16 | -1.25 | .212 | .01 |
Interpretation: There is one significant interaction: condition by trait self-control (Brief Self-Control Scale, BSCS). There is also a significant effect of SOPT, along with marginal effects of condition and of the IAT. [Note: it seems that after standardizing and winsorizing, the effects of condition and of the IAT went from significant to only marginally significant.]
big.mod2 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | b | t | p | sr2 |
blastduration.mad.w | condition_dum | 233 | 0.30 | 0.95 | .345 | .00 |
blastduration.mad.w | KIMS.mad.w | 233 | 0.15 | 1.28 | .200 | .01 |
blastduration.mad.w | BSCS.mad.w | 233 | -0.17 | -1.41 | .160 | .01 |
blastduration.mad.w | BAQ.mad.w | 233 | 0.06 | 0.49 | .624 | .00 |
blastduration.mad.w | SOPT.mad.cox.w | 233 | -0.00 | -3.38 | .001 | .04 |
blastduration.mad.w | IAT.mad.w | 233 | 0.25 | 2.80 | .005 | .03 |
blastduration.mad.w | condition_dum:KIMS.mad.w | 233 | -0.28 | -1.69 | .093 | .01 |
blastduration.mad.w | condition_dum:BSCS.mad.w | 233 | 0.71 | 3.80 | < .001 | .05 |
blastduration.mad.w | condition_dum:BAQ.mad.w | 233 | 0.15 | 0.89 | .375 | .00 |
blastduration.mad.w | condition_dum:SOPT.mad.cox.w | 233 | -0.00 | -0.42 | .672 | .00 |
blastduration.mad.w | condition_dum:IAT.mad.w | 233 | -0.18 | -1.41 | .161 | .01 |
big.mod3 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor | df | b | t | p | sr2 |
blastintensity.duration.mad.w | condition_dum | 233 | 0.52 | 1.71 | .088 | .01 |
blastintensity.duration.mad.w | KIMS.mad.w | 233 | 0.10 | 0.84 | .402 | .00 |
blastintensity.duration.mad.w | BSCS.mad.w | 233 | -0.14 | -1.26 | .210 | .01 |
blastintensity.duration.mad.w | BAQ.mad.w | 233 | 0.13 | 1.14 | .254 | .00 |
blastintensity.duration.mad.w | SOPT.mad.cox.w | 233 | -0.00 | -2.53 | .012 | .02 |
blastintensity.duration.mad.w | IAT.mad.w | 233 | 0.17 | 1.93 | .054 | .01 |
blastintensity.duration.mad.w | condition_dum:KIMS.mad.w | 233 | -0.16 | -0.96 | .336 | .00 |
blastintensity.duration.mad.w | condition_dum:BSCS.mad.w | 233 | 0.58 | 3.27 | .001 | .04 |
blastintensity.duration.mad.w | condition_dum:BAQ.mad.w | 233 | 0.03 | 0.20 | .840 | .00 |
blastintensity.duration.mad.w | condition_dum:SOPT.mad.cox.w | 233 | -0.00 | -0.82 | .411 | .00 |
blastintensity.duration.mad.w | condition_dum:IAT.mad.w | 233 | -0.13 | -1.05 | .296 | .00 |
Let’s plot the main significant interaction(s).
# Plot
interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
Interpretation: For people with low self-control, the priming mindfulness condition (cond = 1) seems to have little effect on blast intensity relative to the control condition (cond = 0). In contrast, for people with high self-control, the priming mindfulness condition relates to higher blast intensity.
# Plot
interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
Interpretation: For people with low self-control, the priming mindfulness condition (cond = 1) seems to have little effect on blast duration relative to the control condition (cond = 0). In contrast, for people with high self-control, the priming mindfulness condition relates to longer blast duration.
# Plot
interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
Interpretation: For people with low self-control, the priming mindfulness condition (cond = 1) seems to have little effect on the blast intensity × duration composite relative to the control condition (cond = 0). In contrast, for people with high self-control, the priming mindfulness condition relates to higher scores on this composite.
Let’s look at the simple slopes now (only for the significant interaction).
big.mod1 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.mad.w") %>%
nice_table(highlight = TRUE)
Dependent Variable | Predictor (+/-1 SD) | df | b | t | p | sr2 |
blastintensity.mad.w | condition_dum (LOW-BSCS.mad.w) | 233 | -0.04 | -0.11 | .915 | .00 |
blastintensity.mad.w | condition_dum (MEAN-BSCS.mad.w) | 233 | 0.50 | 1.67 | .097 | .01 |
blastintensity.mad.w | condition_dum (HIGH-BSCS.mad.w) | 233 | 1.05 | 3.03 | .003 | .03 |
Interpretation: The effect of priming mindfulness on blast intensity is only significant for people with high self-control.
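The simple-slopes logic above can be reproduced by recentering the moderator: refit the model with the moderator shifted so that its zero point sits at -1 SD, the mean, or +1 SD; the coefficient of the focal predictor then gives its effect at that moderator level. A self-contained sketch with simulated data (the real analysis uses `big.mod1`, which also includes the other covariates):

```r
set.seed(42)
toy <- data.frame(g = rep(0:1, each = 100), m = rnorm(200))
toy$y <- 0.5 * toy$g * toy$m + rnorm(200)  # true effect of g grows with m

# Effect of g at m = -1 SD, the mean, and +1 SD, by recentering m:
slope_at <- function(level) {
  fit <- lm(y ~ g * I(m - level), data = toy)
  unname(coef(fit)["g"])
}

sapply(c(-sd(toy$m), 0, sd(toy$m)), slope_at)  # increasing simple slopes
```

This is exactly why the simple slope of condition is near zero at low BSCS but sizable at high BSCS: in a model with an interaction term, the "main effect" of the predictor is its effect when the moderator equals zero.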
Based on the absence of an effect for blast duration or for the blast intensity × duration composite, I would suggest modifying the Taylor task to remove the blast duration prompt so as to keep only blast intensity. It is one of the specifiable options in the task parameters, so it is an easy change. I believe doing so will provide a cleaner and more straightforward task for participants.
The full script of executable code contained in this document is reproduced here.
# Set up the environment (or use local alternative `source("utils/config.R")`)
source("utils/config.R")
fast <- FALSE # Make this false to skip the chunks
# This chunk is a bit complex so don't worry about it: it's made to add badges to the HTML versions
# NOTE: You have to replace the links accordingly to have working "buttons" on the HTML versions
if (!knitr::is_latex_output() && knitr::is_html_output()) {
cat("[](https://github.com/rempsyc/rempsyc/actions)]
[](https://realitybending.github.io/TemplateResults/)
[](https://github.com/RealityBending/TemplateResults/raw/main/word_and_pdf/SupplementaryMaterials.docx)
[](https://github.com/RealityBending/TemplateResults/blob/main/word_and_pdf/SupplementaryMaterials.pdf)")
}
library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(patchwork)
library(ggplot2)
library(rstatix)
library(forecast)
library(DescTools)
library(report)
source("D:/UQAM/Groupe Étudiant @Projet Remi - Documents/Etude 1 -- priming-agression/Analyses - Priming 1/Scripts R/mad functions.R")
summary(report::report(sessionInfo()))
df <- read.csv("data/data.csv")
data <- read.csv("data/fulldataset.csv")
cat(paste("The data consists of",
report::report_participants(data),
". There are no demographics available."))
# Dummy-code group variable
data <- data %>%
mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
condition = as.factor(condition))
# Allocation ratio
cat(paste("The allocation ratio is: ",
report::report(data$condition)))
# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
"blastintensity.first", "blastduration.first", "KIMS",
"BSCS", "BAQ", "SOPT", "IAT")
# Create new variable blastintensity.duration
data$blastintensity.duration <- (data$blastintensity * data$blastduration)
# Divide by 2? Do some other sort of transformation given I multiplied two scores?
# Should I multiply them after standardization or before?
# Standardize and center main continuous IV variable (based on MAD)
data <- data %>%
mutate(across(all_of(col.list),
~scale_mad(.x),
.names = "{col}.mad"))
# Rename col.list with the MAD extension
col.list <- paste0(col.list, ".mad")
report::cite_packages(sessionInfo())
# Group normality
sapply(col.list, function(x)
nice_normality(data,
x,
"condition",
shapiro = TRUE,
title = x),
USE.NAMES = TRUE,
simplify = FALSE)
# Reverse
#data$SOPT.r <- data$SOPT*-1
# data$SOPT.r <- data$SOPT.mad*-1
# Add constant
# data$SOPT.r <- data$SOPT.r + 1
data$SOPT.r <- data$SOPT.mad + 7
# Box–Cox transformation
#data$SOPT.cox = BoxCox(data$SOPT.r, lambda = 0.25)
# data$SOPT.mad.cox = BoxCox(data$SOPT.r, lambda = 0.25)
# We use a lambda of 3.5, chosen by trial and error to make both groups look as normal as possible.
data$SOPT.mad.cox = BoxCox(data$SOPT.r, lambda = 3.5)
# Check normality again
nice_normality(data,
"SOPT.mad.cox",
"condition",
shapiro = TRUE)
# Update col.list
col.list[charmatch("SOPT", col.list)] <-
paste0(col.list[charmatch("SOPT", col.list)], ".cox")
# Plotting variance
plots(lapply(col.list, function(x) {
nice_varplot(data, x, group = "condition")
}),
n_columns = 3)
# Using boxplots
plots(lapply(col.list, function(x) {
ggplot(data, aes(condition, !!sym(x))) +
geom_boxplot()
}),
n_columns = 3)
find_mad(data, col.list, "manualworkerId", criteria = 3)
# 17 people using the MAD method
# 37 on MAD standardized data
# Winsorize variables of interest with MAD
data <- data %>%
mutate(across(all_of(col.list),
winsorize_mad,
.names = "{.col}.w"))
# Update col.list
col.list <- paste0(col.list, ".w")
nice_t_test(data,
response = col.list,
group = "condition") %>%
nice_table(highlight = 0.10)
nice_violin(data,
group = "condition",
response = "blastintensity.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
nice_violin(data,
group = "condition",
response = "blastduration.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
nice_violin(data,
group = "condition",
response = "blastintensity.duration.mad.w",
comp1 = 1,
comp2 = 2,
obs = TRUE)
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity),
SD = sd(blastintensity),
N = n()) %>%
nice_table(width = 0.40)
data %>%
group_by(condition) %>%
summarize(M = mean(blastduration),
SD = sd(blastduration),
N = n()) %>%
nice_table(width = 0.40)
data %>%
group_by(condition) %>%
summarize(M = mean(blastintensity.duration),
SD = sd(blastintensity.duration),
N = n()) %>%
nice_table(width = 0.40)
big.mod1 <- lm(blastintensity.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod1)
big.mod2 <- lm(blastduration.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod2)
big.mod3 <- lm(blastintensity.duration.mad.w ~ condition_dum*KIMS.mad.w +
condition_dum*BSCS.mad.w + condition_dum*BAQ.mad.w +
condition_dum*SOPT.mad.cox.w + condition_dum*IAT.mad.w,
data = data, na.action="na.exclude")
check_model(big.mod3)
big.mod1 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
big.mod2 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
big.mod3 %>%
nice_lm() %>%
nice_table(highlight = TRUE)
# Plot
interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
# Plot
interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
# Plot
interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.mad.w",
modxvals = NULL, interval = TRUE)
big.mod1 %>%
nice_lm_slopes(predictor = "condition_dum",
moderator = "BSCS.mad.w") %>%
nice_table(highlight = TRUE)
report::cite_packages(sessionInfo())